    Low Power Depth Estimation of Rigid Objects for Time-of-Flight Imaging

    Depth sensing is useful in a variety of applications that range from augmented reality to robotics. Time-of-flight (TOF) cameras are appealing because they obtain dense depth measurements with minimal latency. However, for many battery-powered devices, the illumination source of a TOF camera is power-hungry and can limit the battery life of the device. To address this issue, we present an algorithm that lowers the power for depth sensing by reducing the usage of the TOF camera and estimating depth maps using concurrently collected images. Our technique also adaptively controls the TOF camera and enables it when an accurate depth map cannot be estimated. To ensure that the overall system power for depth sensing is reduced, we design our algorithm to run on a low-power embedded platform, where it outputs 640 × 480 depth maps at 30 frames per second. We evaluate our approach on several RGB-D datasets, where it produces depth maps with an overall mean relative error of 0.96% and reduces the usage of the TOF camera by 85%. When used with commercial TOF cameras, we estimate that our algorithm can lower the total power for depth sensing by up to 73%.
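
    The mechanism behind these savings, warping the last measured depth map through the camera's rigid motion and re-enabling the TOF camera when the warped estimate looks unreliable, can be sketched in a few lines. This is a minimal illustration under an assumed pinhole camera model, not the paper's implementation; warp_depth, need_tof, and the coverage threshold are illustrative names and numbers.

        import numpy as np

        def warp_depth(depth, K, R, t):
            # Reproject the previous depth map through the rigid motion (R, t)
            # to predict the current frame's depth. Holes are left as 0 and no
            # z-buffering is done: overlapping projections simply overwrite.
            H, W = depth.shape
            u, v = np.meshgrid(np.arange(W), np.arange(H))
            rays = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])
            pts = (np.linalg.inv(K) @ rays) * depth.ravel()  # back-project to 3D
            pts = R @ pts + t.reshape(3, 1)                  # apply rigid motion
            proj = K @ pts
            z = proj[2]
            ok = z > 0
            x = np.round(proj[0, ok] / z[ok]).astype(int)
            y = np.round(proj[1, ok] / z[ok]).astype(int)
            inb = (x >= 0) & (x < W) & (y >= 0) & (y < H)
            out = np.zeros_like(depth)
            out[y[inb], x[inb]] = z[ok][inb]
            return out

        def need_tof(warped, min_coverage=0.9):
            # Illustrative fallback test: fire the TOF camera again when too
            # few pixels received a warped depth estimate.
            return np.mean(warped > 0) < min_coverage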

    Algorithms and systems for low power time-of-flight imaging

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020. Cataloged from the official PDF of thesis. Includes bibliographical references (pages 151-158).

    Depth sensing is useful for many emerging applications that range from augmented reality to robotic navigation. Time-of-flight (ToF) cameras are appealing depth sensors because they obtain dense depth maps with minimal latency. However, for mobile and embedded devices, ToF cameras, which obtain depth by emitting light and estimating its roundtrip time, can be power-hungry and limit the battery life of the underlying device. To reduce the power for depth sensing, we present algorithms to address two scenarios. For applications where RGB images are concurrently collected, we present algorithms that reduce the usage of the ToF camera and estimate new depth maps without illuminating the scene. We exploit the fact that many applications operate in nearly rigid environments, and our algorithms use the sparse correspondences across the consecutive RGB images to estimate the rigid motion and use it to obtain new depth maps. Our techniques can reduce the usage of the ToF camera by up to 85%, while still estimating new depth maps within 1% of the ground truth for rigid scenes and 1.74% for dynamic ones. When only the data from a ToF camera is used, we propose algorithms that reduce the overall amount of light that the ToF camera emits to obtain accurate depth maps. Our techniques use the rigid motions in the scene, which can be estimated using the infrared images that a ToF camera obtains, to temporally mitigate the impact of noise. We show that our approaches can reduce the amount of emitted light by up to 81% and the mean relative error of the depth maps by up to 64%. Our algorithms are all computationally efficient and can obtain dense depth maps at up to real-time rates on standard and embedded computing platforms. Compared to applications that just use the ToF camera and incur the cost of higher sensor power, and to those that estimate depth entirely using RGB images, which are inaccurate and have high latency, our algorithms enable energy-efficient, accurate, and low-latency depth sensing for many emerging applications. by James Noraky. Ph.D.
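
    The rigid motion in the first scenario is recovered from sparse feature correspondences between consecutive RGB images together with the last measured depth map. One standard way to realize that step, shown here only as a sketch and not as the thesis's exact method, is to back-project the matched features to 3D and solve a robust PnP problem with OpenCV:

        import numpy as np
        import cv2

        def rigid_motion(prev_depth, prev_pts, curr_pts, K):
            # prev_pts, curr_pts: (N, 2) matched feature locations in the
            # previous and current RGB frames; K: (3, 3) camera intrinsics.
            z = prev_depth[prev_pts[:, 1].astype(int), prev_pts[:, 0].astype(int)]
            keep = z > 0  # drop features that have no depth measurement
            x = (prev_pts[keep, 0] - K[0, 2]) * z[keep] / K[0, 0]
            y = (prev_pts[keep, 1] - K[1, 2]) * z[keep] / K[1, 1]
            obj = np.stack([x, y, z[keep]], axis=1).astype(np.float32)
            img = curr_pts[keep].astype(np.float32)
            # RANSAC PnP is robust to the outliers typical of sparse matching.
            ok, rvec, tvec, _ = cv2.solvePnPRansac(obj, img, K, None)
            if not ok:
                return None, None
            R, _ = cv2.Rodrigues(rvec)
            return R, tvec  # motion that maps previous-frame 3D points forward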

    A spectral approach to noninvasive model-based estimation of intracranial pressure

    Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. Cataloged from PDF version of thesis. Includes bibliographical references (pages 60-61).

    Intracranial pressure (ICP) is the hydrostatic pressure of the cerebrospinal fluid. Ideally, ICP should be monitored in many neuropathological conditions, as elevated ICP is correlated with poor neurocognitive outcomes after injuries to the brain. Measuring ICP requires the surgical placement of a sensor or catheter into the brain tissue or cerebrospinal fluid spaces of the brain. Given the risk of infection and brain damage, ICP is only measured in a small subset of those patients whose treatment could benefit from knowing it. We expand on a previously proposed model-based time-domain approach to noninvasive, patient-specific, and continuous estimation of ICP using routinely measured waveforms, which has been validated on patients with traumatic brain injuries. Here, we present a model-based algorithm that estimates ICP using the functional relationship between the spectral densities of the routinely measured waveforms. We applied this algorithm to both a simulated and a clinical dataset. For the simulated dataset, we achieved a mean error (bias) of 1.2 mmHg and a standard deviation of error (SDE) of 2.2 mmHg. For the clinical dataset of patients with traumatic brain injuries, we achieved a bias of 13.7 mmHg and an SDE of 15.0 mmHg. While the clinical results are not favorable, we describe sources of estimation error and future directions of research to improve the ICP estimates. by James Noraky. M.Eng.
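
    The spectral quantities such an estimator consumes are easy to set up with standard tools. The sketch below is a guess at the framing rather than the thesis's actual model: it assumes arterial blood pressure (ABP) and cerebral blood flow velocity (CBFV) are the routinely measured waveforms and estimates their spectral and cross-spectral densities with Welch's method, from which a model-based estimator would fit its parameters and hence ICP.

        from scipy.signal import welch, csd

        def spectral_relationship(abp, cbfv, fs):
            # Power spectral densities of each waveform and their
            # cross-spectral density, all sampled at fs Hz.
            f, Pxx = welch(abp, fs=fs, nperseg=1024)
            _, Pyy = welch(cbfv, fs=fs, nperseg=1024)
            _, Pxy = csd(abp, cbfv, fs=fs, nperseg=1024)
            # H1 estimate of the frequency response of the ABP -> CBFV
            # channel; in a linear cerebrovascular model its parameters
            # depend on ICP.
            H = Pxy / Pxx
            return f, Pxx, Pyy, H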

    Depth Estimation of Non-Rigid Objects for Time-of-Flight Imaging

    Depth sensing is useful for a variety of applications that range from augmented reality to robotics. Time-of-flight (TOF) cameras are appealing because they obtain dense depth measurements with low latency. However, for reasons ranging from power constraints to multi-camera interference, the frequency at which accurate depth measurements can be obtained is reduced. To address this, we propose an algorithm that uses concurrently collected images to estimate the depth of non-rigid objects without using the TOF camera. Our technique models non-rigid objects as locally rigid and uses previous depth measurements along with the optical flow of the images to estimate depth. In particular, we show how we exploit the previous depth measurements to directly estimate pose and how we integrate this with our model to estimate the depth of non-rigid objects by finding the solution to a sparse linear system. We evaluate our technique on an RGB-D dataset of deformable objects, where we estimate depth with a mean relative error of 0.37% and outperform other adapted techniques.
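
    The closing step, finding the depth as the solution to a sparse linear system, follows a generic pattern worth seeing once. The constraints below (a data term from flow-propagated depth and a neighbor-coupling term standing in for local rigidity) are simplified stand-ins rather than the paper's formulation; the point is how per-pixel linear constraints stack into one sparse least-squares solve.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import lsqr

        def solve_depth(data_idx, data_val, H, W, lam=0.1):
            # Data term: pixel i should keep its propagated depth, z[i] ~ d_i.
            rows = list(range(len(data_idx)))
            cols = list(data_idx)
            vals = [1.0] * len(data_idx)
            b = list(data_val)
            # Coupling term: lam * (z[i] - z[i+1]) ~ 0 for horizontal neighbors.
            r = len(b)
            for i in range(H * W - 1):
                if (i + 1) % W == 0:  # skip pairs that straddle the image border
                    continue
                rows += [r, r]; cols += [i, i + 1]; vals += [lam, -lam]
                b.append(0.0); r += 1
            A = sparse.coo_matrix((vals, (rows, cols)), shape=(len(b), H * W)).tocsr()
            # Solve the over-determined system in the least-squares sense.
            return lsqr(A, np.asarray(b))[0].reshape(H, W)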

    Low Power Adaptive Time-of-Flight Imaging for Multiple Rigid Objects

    Time-of-flight (TOF) cameras are becoming increasingly popular for many mobile applications. To obtain accurate depth maps, TOF cameras must emit many pulses of light, which consumes a lot of power and lowers the battery life of mobile devices. However, lowering the number of emitted pulses results in noisy depth maps. To obtain accurate depth maps while reducing the overall number of emitted pulses, we propose an algorithm that adaptively varies the number of pulses to infrequently obtain high-power depth maps and uses them to help estimate subsequent low-power ones. To estimate these depth maps, our technique uses the previous frame by accounting for the 3D motion in the scene. We assume that the scene contains independently moving rigid objects and show that we can efficiently estimate the motions using just the data from a TOF camera. The resulting algorithm estimates 640 × 480 depth maps at 30 frames per second on an embedded processor. We evaluate our approach on data collected with a pulsed TOF camera and show that we can reduce the mean relative error of the low-power depth maps by up to 64% and the number of emitted pulses by up to 81%.
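
    The temporal mitigation admits a compact sketch. The fusion below blends a noisy few-pulse depth map with the previous estimate after that estimate has been warped into the current frame using the motions recovered from the infrared images; the blending weight and the pulse schedule are illustrative numbers, not the paper's parameters.

        import numpy as np

        def fuse_low_power_depth(noisy_depth, warped_prev, alpha=0.8):
            # Blend wherever the motion-compensated previous estimate landed;
            # elsewhere fall back to the noisy low-power measurement.
            out = noisy_depth.copy()
            ok = warped_prev > 0
            out[ok] = alpha * warped_prev[ok] + (1 - alpha) * noisy_depth[ok]
            return out

        def pulses_for_frame(frame_idx, period=10, hi=1000, lo=200):
            # Fire a high-power frame only occasionally and lean on the
            # motion-compensated fusion in between.
            return hi if frame_idx % period == 0 else lo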